Active speaker detection (ASD) in videos with multiple speakers is a challenging task, as it requires learning effective audiovisual features and spatial-temporal correlations over long temporal windows. In this paper, we present SPELL, a novel spatial-temporal graph learning framework that can solve complex tasks such as ASD. To this end, each person in a video frame is first encoded as a unique node for that frame. Nodes corresponding to a single person across frames are connected to encode their temporal dynamics, and nodes within a frame are also connected to encode inter-person relationships. SPELL thus reduces ASD to a node classification task. Importantly, SPELL is able to reason over long temporal contexts for all nodes without relying on computationally expensive fully connected graph neural networks. Through extensive experiments on the AVA-ActiveSpeaker dataset, we demonstrate that learning graph-based representations significantly improves active speaker detection performance owing to their explicit spatial and temporal structure. SPELL outperforms all previous state-of-the-art approaches while requiring significantly lower memory and computational resources. Our code is publicly available at https://github.com/sra2/spell
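As a rough illustration of the graph construction described in the abstract above, the following Python sketch builds the edge list for such a person-graph. The `Detection` container, `build_asd_graph`, and the window size `tau` are hypothetical names for illustration, not taken from the SPELL codebase.

```python
# Sketch: one node per person per frame, temporal edges linking the same
# identity across frames within a window, and spatial edges linking
# different people within the same frame.
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int        # frame index
    person_id: int    # tracked identity
    node_id: int      # unique node index in the graph

def build_asd_graph(detections, tau=30):
    """Return an undirected edge list over detection nodes.

    tau: temporal window (in frames) within which same-identity
    nodes are connected.
    """
    edges = set()
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            same_frame = a.frame == b.frame and a.person_id != b.person_id
            same_person = (a.person_id == b.person_id
                           and abs(a.frame - b.frame) <= tau)
            if same_frame or same_person:
                edges.add((a.node_id, b.node_id))
    return sorted(edges)

# Two people tracked over three frames:
dets = [Detection(f, p, 2 * f + p) for f in range(3) for p in range(2)]
print(build_asd_graph(dets))
```

Active speaker detection then amounts to classifying each node of this graph, with message passing carrying both temporal and inter-person context.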
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problems they solve. In practice, we observe a "wayward" behavior between the task solved by a continuous prompt and its nearest-neighbor discrete projection: we can find continuous prompts that solve a task while being projected to arbitrary text (e.g., the definition of a different or even a contradictory task), all while staying within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness: we can find prompts that map more closely to any arbitrary text with a smaller drop in accuracy. These findings have important implications for the difficulty of faithfully interpreting continuous prompts and their generalization across models and tasks, providing guidance for future progress in prompting language models.
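The "nearest-neighbor discrete projection" at the heart of the waywardness result can be sketched in a few lines. The shapes and names below (`embed_matrix`, `prompt`) are illustrative assumptions, not the paper's code.

```python
# Sketch: each continuous prompt vector is mapped to the vocabulary token
# whose embedding is closest, yielding a discrete "interpretation".
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, prompt_len = 1000, 64, 5

embed_matrix = rng.normal(size=(vocab_size, dim))   # token embedding table
prompt = rng.normal(size=(prompt_len, dim))          # learned soft prompt

# Nearest neighbor under Euclidean distance (cosine is a common alternative).
dists = np.linalg.norm(prompt[:, None, :] - embed_matrix[None, :, :], axis=-1)
projected_token_ids = dists.argmin(axis=1)
print(projected_token_ids)  # the discrete projection of the soft prompt
```

The waywardness finding is that a prompt can be optimized to sit near the embeddings of an arbitrary target text (so this projection recovers that text) while still solving an unrelated task.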
We address the problem of active speaker detection through a new framework, called SPELL, that learns long-range multimodal graphs to encode the inter-modal relationship between audio and visual data. We cast active speaker detection as a node classification task that is aware of long-term dependencies. We first construct a graph from a video such that each node corresponds to one person. Nodes representing the same identity share edges between them within a defined temporal window, and nodes within the same video frame are also connected to encode inter-person interactions. Through extensive experiments on the AVA-ActiveSpeaker dataset, we demonstrate that learning graph-based representations, owing to their explicit spatial and temporal structure, significantly improves overall performance. SPELL outperforms several relevant baselines and performs on par with state-of-the-art models while requiring an order of magnitude lower computation cost.
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operators to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Language models have become increasingly popular in recent years for tasks like information retrieval. As use cases become oriented toward specific domains, fine-tuning has become the default route to strong performance. To fine-tune these models for specific tasks and datasets, it is necessary to carefully tune the model's hyperparameters and training techniques. In this paper, we present an in-depth analysis of the performance of four transformer-based language models on the task of biomedical information retrieval. The models we consider are DeepMind's RETRO (7B parameters), GPT-J (6B parameters), GPT-3 (175B parameters), and BLOOM (176B parameters). We compare their performance on the basis of relevance, accuracy, and interpretability, using a large corpus of 480,000 research papers on protein structure/function prediction as our dataset. Our findings suggest that smaller models, with <10B parameters and fine-tuned on domain-specific datasets, tend to outperform larger language models on highly specific questions in terms of accuracy, relevance, and interpretability by a significant margin (+50% on average). However, larger models do provide generally better results on broader prompts.
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if crowdsourced and is computationally expensive to extend to new perturbation types if generated using supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters the generation to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples, on three evaluation sets written by human workers and via human-AI collaboration.
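The generate-then-filter pipeline described above might be sketched as follows; `generate_perturbations` and `teacher_predict` are hypothetical stand-ins for the prompted general language model and the task-specific teacher, not DISCO's actual interfaces.

```python
# Sketch: an LLM proposes phrasal perturbations of each example, then a
# task-specific teacher keeps only high-quality, label-flipping candidates
# as distilled counterfactual training data.
def distill_counterfactuals(examples, generate_perturbations, teacher_predict,
                            min_confidence=0.9):
    counterfactuals = []
    for text, label in examples:
        for candidate in generate_perturbations(text, label):
            new_label, confidence = teacher_predict(candidate)
            # Keep only confident, label-flipping perturbations.
            if new_label != label and confidence >= min_confidence:
                counterfactuals.append((candidate, new_label))
    return counterfactuals
```

The filtered pairs are then mixed into training data for the smaller student model, which is where the reported robustness and generalization gains are measured.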
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results alongside all data or model artifacts created during our investigation.
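A minimal sketch of the retrieve-then-summarize setting follows, with a toy lexical retriever standing in for a real retrieval model; all names are illustrative, not the paper's code.

```python
# Sketch: open-domain MDS first retrieves documents for an information
# need, then summarizes them, so retrieval errors flow directly into the
# summarizer input -- the sensitivity probed by the perturbation study.
def overlap_retrieve(query, corpus, top_k=10):
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:top_k]

def open_domain_mds(query, corpus, summarize, top_k=10):
    docs = overlap_retrieve(query, corpus, top_k)
    return summarize(docs)  # summarize: any multi-document summarizer
```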
Language tasks involving character-level manipulations (e.g., spelling correction, many word games) are challenging for models based on subword tokenization. To address this, we adapt the interchange intervention training method of Geiger et al. (2021) to operate on type-level variables over characters. This allows us to encode robust, position-independent character-level information in the internal representations of subword-based models. We additionally introduce a suite of character-level tasks that systematically vary in their dependence on meaning and sequence-level context. While simple character-level tokenization approaches still perform best on purely form-based tasks like string reversal, our method is superior for more complex tasks that blend form, meaning, and context, such as spelling correction in context and word search games. Our approach also leads to subword-based models with human-interpretable internal representations of characters.
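A toy sketch of an interchange intervention may help: a hidden slice hypothesized to store a character-level variable is read from a "source" forward pass and patched into a "base" one; training would then push the patched output toward the counterfactual label. This is a simplified illustration with invented names, not the method of Geiger et al. (2021) itself.

```python
# Sketch: patch an aligned hidden slice from a source input into the base
# forward pass (the "interchange intervention").
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, vocab=128, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layer1 = nn.Linear(dim, dim)
        self.layer2 = nn.Linear(dim, dim)

    def forward(self, ids, patch=None, slice_=slice(0, 8)):
        h = torch.relu(self.layer1(self.embed(ids).mean(dim=1)))
        if patch is not None:      # interchange intervention:
            h = h.clone()
            h[:, slice_] = patch   # overwrite the aligned variable
        return self.layer2(h), h[:, slice_]

model = TinyEncoder()
base = torch.randint(0, 128, (1, 6))
source = torch.randint(0, 128, (1, 6))
_, cached = model(source)                   # read the variable from source
patched_out, _ = model(base, patch=cached)  # write it into the base pass
```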
In data-driven systems, data exploration is imperative for making real-time decisions. However, big data is stored in massive databases from which retrieval is difficult. Approximate Query Processing (AQP) is a technique for providing approximate answers to aggregate queries based on a summary of the data (a synopsis) that closely replicates the behavior of the actual data; it is useful wherever an approximate answer to a query is acceptable in a fraction of the real execution time. In this paper, we discuss the use of Generative Adversarial Networks (GANs) for generating tabular data that can be employed in AQP for synopsis construction. We first discuss the challenges associated with constructing synopses in relational databases and then introduce solutions to those challenges. Following that, we organize statistical metrics to evaluate the quality of the generated synopses. We conclude that the complexity of tabular data makes it difficult for algorithms to understand relational database semantics during training, and that improved versions of tabular GANs are capable of constructing synopses that can revolutionize data-driven decision-making systems.
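As one concrete possibility for this kind of synopsis construction, the sketch below uses the open-source `ctgan` package (one "improved tabular GAN", not necessarily the variant evaluated in the paper); the `sales.csv` table and its `region`/`amount` columns are hypothetical.

```python
# Sketch: train a tabular GAN on a relational table, sample a small
# synthetic synopsis, and answer an aggregate query approximately on it.
import pandas as pd
from ctgan import CTGAN

df = pd.read_csv("sales.csv")               # hypothetical relational table
gan = CTGAN(epochs=100)
gan.fit(df, discrete_columns=["region"])    # mark categorical attributes

synopsis = gan.sample(10_000)               # small synthetic synopsis

# Approximate aggregate query: run it on the synopsis instead of the
# full table, then compare against the exact answer.
approx = synopsis.groupby("region")["amount"].mean()
exact = df.groupby("region")["amount"].mean()
print((approx - exact).abs() / exact.abs()) # relative error per group
```

The statistical metrics the paper organizes would score how closely such a synopsis tracks the real table's aggregate behavior.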
Trajectory-User Linking (TUL) is a relatively new mobility classification task in which anonymous trajectories are linked to the users who generated them. With applications ranging from personalized recommendations to criminal activity detection, TUL has received increasing attention over the past five years. While research has focused mainly on learning deep representations that capture complex spatio-temporal mobility patterns unique to individual users, we demonstrate that visit patterns are highly unique among users and thus simple heuristics applied directly to the raw data are sufficient to solve TUL. More specifically, we demonstrate that a single check-in per trajectory is enough to correctly predict the identity of the user up to 85% of the time. Moreover, by using a non-parametric classifier, we scale up TUL to over 100k users, which is an increase over the state of the art by three orders of magnitude. Extensive empirical analysis on four real-world datasets (Brightkite, Foursquare, Gowalla and Weeplaces) compares our findings to state-of-the-art results, and more importantly validates our claim that TUL is easier than commonly believed.
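The visit-pattern heuristic lends itself to a very short sketch; the `(user, venue)` data layout below is an assumption for illustration, not the paper's classifier.

```python
# Sketch: link a trajectory to a user from raw visit counts alone --
# each venue votes for the user who checks in there most often.
from collections import Counter, defaultdict

def fit(train):                      # train: iterable of (user, venue)
    by_venue = defaultdict(Counter)
    for user, venue in train:
        by_venue[venue][user] += 1
    # For each venue, remember the user who visits it most often.
    return {v: c.most_common(1)[0][0] for v, c in by_venue.items()}

def predict(model, trajectory):      # trajectory: list of venues
    votes = Counter(model[v] for v in trajectory if v in model)
    return votes.most_common(1)[0][0] if votes else None

model = fit([("alice", "cafe"), ("alice", "gym"), ("bob", "bar")])
print(predict(model, ["cafe"]))      # -> 'alice'
```

With venues this distinctive per user, even a single check-in often suffices, which is the core of the 85% single-check-in result.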